Multivariate Random Variable

In probability and statistics, a multivariate random variable or random vector is a list or vector of mathematical variables each of whose value is unknown, either because the value has not yet occurred or because there is imperfect knowledge of its value. The individual variables in a random vector are grouped together because they are all part of a single mathematical system — often they represent different properties of an individual statistical unit. For example, while a given person has a specific age, height and weight, the representation of these features of an unspecified person from within a group would be a random vector. Normally each element of a random vector is a real number.

Random vectors are often used as the underlying implementation of various types of aggregate random variables, e.g. a random matrix, a random tree, a random sequence, a stochastic process, etc.

Formally, a multivariate random variable is a column vector \mathbf{X} = (X_1,\dots,X_n)^\mathsf{T} (or its transpose, which is a row vector) whose components are random variables on the probability space (\Omega, \mathcal{F}, P), where \Omega is the sample space, \mathcal{F} is the sigma-algebra (the collection of all events), and P is the probability measure (a function returning each event's probability).


Probability distribution
Every random vector gives rise to a probability measure on \mathbb{R}^n with the Borel algebra as the underlying sigma-algebra. This measure is also known as the joint probability distribution, the joint distribution, or the multivariate distribution of the random vector.

The distributions of each of the component random variables X_i are called marginal distributions. The conditional probability distribution of X_i given X_j is the probability distribution of X_i when X_j is known to be a particular value.

The cumulative distribution function F_{\mathbf{X}} : \mathbb{R}^n \to [0,1] of a random vector \mathbf{X}=(X_1,\dots,X_n)^\mathsf{T} is defined as

(2025). 9781107039759, Cambridge University Press.

F_{\mathbf{X}}(\mathbf{x}) = \operatorname{P}(X_1 \le x_1, \ldots, X_n \le x_n),

where \mathbf{x} = (x_1, \dots, x_n)^\mathsf{T}.
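As an illustrative sketch (assuming Python with NumPy; the sample distribution and variable names below are hypothetical), the joint cumulative distribution function can be estimated from draws of the random vector by counting the fraction of samples that fall componentwise at or below \mathbf{x}:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-component random vector (e.g. age, height) sampled 10,000 times.
samples = rng.multivariate_normal(mean=[35.0, 170.0],
                                  cov=[[100.0, 30.0], [30.0, 64.0]],
                                  size=10_000)

def empirical_cdf(samples: np.ndarray, x: np.ndarray) -> float:
    """Estimate F_X(x) = P(X_1 <= x_1, ..., X_n <= x_n) from samples."""
    return float(np.mean(np.all(samples <= x, axis=1)))

# Probability that both components lie at or below their means.
print(empirical_cdf(samples, np.array([35.0, 170.0])))
```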


Operations on random vectors
Random vectors can be subjected to the same kinds of algebraic operations as can non-random vectors: addition, subtraction, multiplication by a scalar, and the taking of inner products.


Affine transformations
Similarly, a new random vector \mathbf{Y} can be defined by applying an affine transformation g\colon \mathbb{R}^n \to \mathbb{R}^n to a random vector \mathbf{X}:

\mathbf{Y}=\mathbf{A}\mathbf{X}+b, where \mathbf{A} is an n \times n matrix and b is an n \times 1 column vector.

If \mathbf{A} is an invertible matrix and \textstyle\mathbf{X} has a probability density function f_{\mathbf{X}}, then the probability density of \mathbf{Y} is

f_{\mathbf{Y}}(y)=\frac{f_{\mathbf{X}}(\mathbf{A}^{-1}(y-b))}{|\det \mathbf{A}|}.
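A minimal numerical check of this formula (a sketch assuming Python with NumPy and SciPy, and using a standard normal \mathbf{X} so that the exact density of \mathbf{Y} is also known in closed form) might look as follows:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical invertible affine map Y = A X + b applied to a 2-D standard normal X.
A = np.array([[2.0, 0.5],
              [0.0, 1.0]])
b = np.array([1.0, -1.0])
f_X = multivariate_normal(mean=np.zeros(2), cov=np.eye(2))

def f_Y(y: np.ndarray) -> float:
    """Density of Y via f_Y(y) = f_X(A^{-1}(y - b)) / |det A|."""
    x = np.linalg.solve(A, y - b)
    return f_X.pdf(x) / abs(np.linalg.det(A))

# For Gaussian X, Y = A X + b is N(b, A A^T); both evaluations should agree.
f_Y_exact = multivariate_normal(mean=b, cov=A @ A.T)
y = np.array([1.5, 0.0])
print(f_Y(y), f_Y_exact.pdf(y))
```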


Invertible mappings
More generally we can study invertible mappings of random vectors.
(2025). 9781981369195, CreateSpace Independent Publishing Platform.

Let g be a one-to-one mapping from an open subset \mathcal{D} of \mathbb{R}^n onto a subset \mathcal{R} of \mathbb{R}^n, let g have continuous partial derivatives in \mathcal{D} and let the Jacobian determinant \det\left (\frac{\partial \mathbf{y}}{\partial \mathbf{x}}\right ) of g be zero at no point of \mathcal{D}. Assume that the real random vector \mathbf{X} has a probability density function f_{\mathbf{X}}(\mathbf{x}) and satisfies P(\mathbf{X} \in \mathcal{D}) = 1. Then the random vector \mathbf{Y}=g(\mathbf{X}) has probability density

\left. f_{\mathbf{Y}}(\mathbf{y})=\frac{f_{\mathbf{X}}(\mathbf{x})}{\left |\det\left (\frac{\partial \mathbf{y}}{\partial \mathbf{x}}\right )\right |} \right |_{\mathbf{x}=g^{-1}(\mathbf{y})} \mathbf{1}(\mathbf{y} \in R_\mathbf{Y})

where \mathbf{1} denotes the indicator function and the set R_\mathbf{Y} = \{ \mathbf{y} = g(\mathbf{x}): f_{\mathbf{X}}(\mathbf{x}) > 0 \} \subseteq \mathcal{R} denotes the support of \mathbf{Y}.
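As a sketch of this change-of-variables rule (assuming Python with NumPy and SciPy; the mapping g(\mathbf{x}) = e^{\mathbf{x}} applied componentwise is only an illustrative choice), one can compare the transformed density with a known closed form:

```python
import numpy as np
from scipy.stats import multivariate_normal, lognorm

# X is a 2-D standard normal; Y = g(X) = exp(X) componentwise, so the Jacobian
# matrix dy/dx is diag(exp(x)) and its determinant at x = g^{-1}(y) is y_1 * y_2.
f_X = multivariate_normal(mean=np.zeros(2), cov=np.eye(2))

def f_Y(y: np.ndarray) -> float:
    x = np.log(y)                # g^{-1}(y)
    jac_det = np.prod(y)         # |det(dy/dx)| evaluated at x = g^{-1}(y)
    return f_X.pdf(x) / jac_det

# Cross-check: the components of Y are independent standard log-normal variables.
y = np.array([0.8, 1.3])
print(f_Y(y), lognorm.pdf(y[0], s=1) * lognorm.pdf(y[1], s=1))
```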


Expected value
The expected value or mean of a random vector \mathbf{X} is a fixed vector \operatorname{E}[\mathbf{X}] whose elements are the expected values of the respective random variables.
(2025). 9780521864701, Cambridge University Press.


Covariance and cross-covariance

Definitions
The covariance matrix (also called second central moment or variance-covariance matrix) of an n \times 1 random vector is an n \times n matrix whose (i,j)th element is the covariance between the i th and the j th random variables. The covariance matrix is the expected value, element by element, of the n \times n matrix computed as [\mathbf{X}-\operatorname{E}[\mathbf{X}]][\mathbf{X}-\operatorname{E}[\mathbf{X}]]^T, where the superscript T refers to the transpose of the indicated vector:

\operatorname{K}_{\mathbf{X}\mathbf{X}} = \operatorname{Var}[\mathbf{X}] = \operatorname{E}\left[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{X}-\operatorname{E}[\mathbf{X}])^{T}\right]

By extension, the cross-covariance matrix between two random vectors \mathbf{X} and \mathbf{Y} (\mathbf{X} having n elements and \mathbf{Y} having p elements) is the n \times p matrix

\operatorname{K}_{\mathbf{X}\mathbf{Y}} = \operatorname{Cov}[\mathbf{X},\mathbf{Y}] = \operatorname{E}\left[(\mathbf{X}-\operatorname{E}[\mathbf{X}])(\mathbf{Y}-\operatorname{E}[\mathbf{Y}])^{T}\right]

where again the matrix expectation is taken element-by-element in the matrix. Here the (i,j)th element is the covariance between the i th element of \mathbf{X} and the j th element of \mathbf{Y}.
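A small sketch of these definitions as sample estimates (assuming Python with NumPy; the data below are simulated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data: X has n = 3 components, Y has p = 2, drawn jointly 100,000 times.
X = rng.normal(size=(100_000, 3))
Y = X[:, :2] + 0.5 * rng.normal(size=(100_000, 2))   # correlated with X by construction

def cov_matrix(X):
    """Sample analogue of K_XX = E[(X - E[X])(X - E[X])^T]; an n x n matrix."""
    Xc = X - X.mean(axis=0)
    return Xc.T @ Xc / len(X)

def cross_cov_matrix(X, Y):
    """Sample analogue of K_XY = E[(X - E[X])(Y - E[Y])^T]; an n x p matrix."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    return Xc.T @ Yc / len(X)

K_XX = cov_matrix(X)            # 3 x 3, symmetric and positive semidefinite
K_XY = cross_cov_matrix(X, Y)   # 3 x 2
print(np.allclose(K_XY, cross_cov_matrix(Y, X).T))   # K_YX = K_XY^T
```

The last line illustrates the transpose relationship between the two cross-covariance matrices noted under Properties below.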


Properties
The covariance matrix is a symmetric matrix, i.e.
\operatorname{K}_{\mathbf{X}\mathbf{X}}^T = \operatorname{K}_{\mathbf{X}\mathbf{X}}.

The covariance matrix is a positive semidefinite matrix, i.e.

\mathbf{a}^T \operatorname{K}_{\mathbf{X}\mathbf{X}} \mathbf{a} \ge 0 \quad \text{for all } \mathbf{a} \in \mathbb{R}^n.

The cross-covariance matrix \operatorname{Cov}[\mathbf{Y},\mathbf{X}] is simply the transpose of the matrix \operatorname{Cov}[\mathbf{X},\mathbf{Y}], i.e.

\operatorname{K}_{\mathbf{Y}\mathbf{X}} = \operatorname{K}_{\mathbf{X}\mathbf{Y}}^T.


Uncorrelatedness
Two random vectors \mathbf{X}=(X_1,...,X_m)^T and \mathbf{Y}=(Y_1,...,Y_n)^T are called uncorrelated if
\operatorname{E}[\mathbf{X}\mathbf{Y}^T] = \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^T.

They are uncorrelated if and only if their cross-covariance matrix \operatorname{K}_{\mathbf{X}\mathbf{Y}} is zero.


Correlation and cross-correlation

Definitions
The correlation matrix (also called second moment) of an n \times 1 random vector is an n \times n matrix whose (i,j)th element is the correlation between the i th and the j th random variables. The correlation matrix is the expected value, element by element, of the n \times n matrix computed as \mathbf{X} \mathbf{X}^T, where the superscript T refers to the transpose of the indicated vector:
(1991). 9780070484771, McGraw-Hill.

\operatorname{R}_{\mathbf{X}\mathbf{X}} = \operatorname{E}[\mathbf{X}\mathbf{X}^{T}]

By extension, the cross-correlation matrix between two random vectors \mathbf{X} and \mathbf{Y} (\mathbf{X} having n elements and \mathbf{Y} having p elements) is the n \times p matrix

\operatorname{R}_{\mathbf{X}\mathbf{Y}} = \operatorname{E}[\mathbf{X}\mathbf{Y}^{T}]


Properties
The correlation matrix is related to the covariance matrix by
\operatorname{R}_{\mathbf{X}\mathbf{X}} = \operatorname{K}_{\mathbf{X}\mathbf{X}} + \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{X}]^T.
Similarly for the cross-correlation matrix and the cross-covariance matrix:
\operatorname{R}_{\mathbf{X}\mathbf{Y}} = \operatorname{K}_{\mathbf{X}\mathbf{Y}} + \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^T.
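This relationship is easy to verify on sample analogues (a sketch assuming Python with NumPy; the simulated data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(loc=2.0, scale=1.5, size=(50_000, 3))   # non-zero-mean data

m = X.mean(axis=0)
R_XX = X.T @ X / len(X)                    # sample analogue of E[X X^T]
K_XX = (X - m).T @ (X - m) / len(X)        # sample covariance matrix

# The identity R_XX = K_XX + E[X] E[X]^T also holds for these sample analogues.
print(np.allclose(R_XX, K_XX + np.outer(m, m)))
```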


Orthogonality
Two random vectors of the same size \mathbf{X}=(X_1,...,X_n)^T and \mathbf{Y}=(Y_1,...,Y_n)^T are called orthogonal if
\operatorname{E}[\mathbf{X}^T \mathbf{Y}] = 0.


Independence
Two random vectors \mathbf{X} and \mathbf{Y} are called independent if for all \mathbf{x} and \mathbf{y}
F_{\mathbf{X,Y}}(\mathbf{x,y}) = F_{\mathbf{X}}(\mathbf{x}) \cdot F_{\mathbf{Y}}(\mathbf{y})
where F_{\mathbf{X}}(\mathbf{x}) and F_{\mathbf{Y}}(\mathbf{y}) denote the cumulative distribution functions of \mathbf{X} and \mathbf{Y} and F_{\mathbf{X,Y}}(\mathbf{x,y}) denotes their joint cumulative distribution function. Independence of \mathbf{X} and \mathbf{Y} is often denoted by \mathbf{X} \perp\!\!\!\perp \mathbf{Y}. Written component-wise, \mathbf{X} and \mathbf{Y} are called independent if for all x_1,\ldots,x_m,y_1,\ldots,y_n
F_{X_1,\ldots,X_m,Y_1,\ldots,Y_n}(x_1,\ldots,x_m,y_1,\ldots,y_n) = F_{X_1,\ldots,X_m}(x_1,\ldots,x_m) \cdot F_{Y_1,\ldots,Y_n}(y_1,\ldots,y_n).


Characteristic function
The characteristic function of a random vector \mathbf{X} with n components is a function \mathbb{R}^n \to \mathbb{C} that maps every vector \mathbf{\omega} = (\omega_1,\ldots,\omega_n)^T to a complex number. It is defined by

\varphi_{\mathbf{X}}(\mathbf{\omega}) = \operatorname{E}\left[e^{i(\mathbf{\omega}^T \mathbf{X})}\right] = \operatorname{E}\left[e^{i(\omega_1 X_1 + \cdots + \omega_n X_n)}\right].
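The characteristic function can be approximated by a Monte Carlo average of e^{i\mathbf{\omega}^T \mathbf{X}} over samples (a sketch assuming Python with NumPy; the Gaussian example is chosen because its characteristic function is known in closed form):

```python
import numpy as np

rng = np.random.default_rng(3)
mu = np.array([0.0, 1.0])
K = np.array([[1.0, 0.3],
              [0.3, 2.0]])
X = rng.multivariate_normal(mu, K, size=200_000)

def empirical_cf(X: np.ndarray, omega: np.ndarray) -> complex:
    """Monte Carlo estimate of phi_X(omega) = E[exp(i * omega^T X)]."""
    return complex(np.mean(np.exp(1j * X @ omega)))

omega = np.array([0.5, -0.2])
# For Gaussian X ~ N(mu, K): phi_X(omega) = exp(i omega^T mu - omega^T K omega / 2).
exact = np.exp(1j * omega @ mu - 0.5 * omega @ K @ omega)
print(empirical_cf(X, omega), exact)    # the two values should be close
```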


Further properties

Expectation of a quadratic form
One can take the expectation of a quadratic form in the random vector \mathbf{X} as follows:
(1981). 9780070339620, McGraw-Hill.

\operatorname{E}[\mathbf{X}^{T}A\mathbf{X}] = \operatorname{E}[\mathbf{X}]^{T}A\operatorname{E}[\mathbf{X}] + \operatorname{tr}(A K_{\mathbf{X}\mathbf{X}}),

where K_{\mathbf{X}\mathbf{X}} is the covariance matrix of \mathbf{X} and \operatorname{tr} refers to the trace of a matrix — that is, to the sum of the elements on its main diagonal (from upper left to lower right). Since the quadratic form is a scalar, so is its expectation.
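A quick Monte Carlo check of this identity, before turning to the formal proof below (a sketch assuming Python with NumPy; the mean vector, covariance matrix and matrix A are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)

mu = np.array([1.0, -2.0, 0.5])
K = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.2],
              [0.0, 0.2, 1.5]])
A = rng.normal(size=(3, 3))                 # arbitrary non-stochastic matrix

X = rng.multivariate_normal(mu, K, size=500_000)
mc_estimate = np.mean(np.einsum('ij,jk,ik->i', X, A, X))   # sample mean of X^T A X
closed_form = mu @ A @ mu + np.trace(A @ K)                # E[X]^T A E[X] + tr(A K_XX)
print(mc_estimate, closed_form)             # should agree up to Monte Carlo error
```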

Proof: Let \mathbf{z} be an m \times 1 random vector with \operatorname{E}[\mathbf{z}] = \mu and \operatorname{Cov}[\mathbf{z}] = V and let A be an m \times m non-stochastic matrix.

Then based on the formula for the covariance, if we denote \mathbf{z}^T = \mathbf{X} and \mathbf{z}^T A^T = \mathbf{Y}, we see that:

\operatorname{Cov}[\mathbf{X},\mathbf{Y}] = \operatorname{E}[\mathbf{X}\mathbf{Y}^T] - \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^T

Hence

\begin{align}
\operatorname{E}[\mathbf{X}\mathbf{Y}^T] &= \operatorname{Cov}[\mathbf{X},\mathbf{Y}] + \operatorname{E}[\mathbf{X}]\operatorname{E}[\mathbf{Y}]^T \\
\operatorname{E}[z^T A z] &= \operatorname{Cov}[z^T, z^T A^T] + \operatorname{E}[z^T]\operatorname{E}[z^T A^T]^T \\
&= \operatorname{Cov}[z^T, z^T A^T] + \mu^T (\mu^T A^T)^T \\
&= \operatorname{Cov}[z^T, z^T A^T] + \mu^T A \mu,
\end{align}

which leaves us to show that

\operatorname{Cov}[z^T, z^T A^T] = \operatorname{tr}(AV).

This is true based on the fact that one can cyclically permute matrices when taking a trace without changing the end result (e.g.: \operatorname{tr}(AB) = \operatorname{tr}(BA)).

We see that

\begin{align}
\operatorname{Cov}[z^T, z^T A^T] &= \operatorname{E}\left[\left(z^T - \operatorname{E}[z^T]\right)\left(z^T A^T - \operatorname{E}[z^T A^T]\right)^T\right] \\
&= \operatorname{E}\left[(z^T - \mu^T)(z^T A^T - \mu^T A^T)^T\right] \\
&= \operatorname{E}\left[(z - \mu)^T (Az - A\mu)\right].
\end{align}

And since

\left( {z - \mu } \right)^T \left( {Az - A\mu } \right)

is a scalar, then

(z - \mu)^T ( Az - A\mu)= \operatorname{tr}\left( {(z - \mu )^T (Az - A\mu )} \right) = \operatorname{tr} \left((z - \mu )^T A(z - \mu ) \right)

trivially. Using the permutation we get:

\operatorname{tr}\left( {(z - \mu )^T A(z - \mu )} \right) = \operatorname{tr}\left( {A(z - \mu )(z - \mu )^T} \right),

and by plugging this into the original formula we get:

\begin{align}
\operatorname{Cov}[z^T, z^T A^T] &= \operatorname{E}\left[\operatorname{tr}\left((z - \mu)^T A (z - \mu)\right)\right] \\
&= \operatorname{E}\left[\operatorname{tr}\left(A (z - \mu)(z - \mu)^T\right)\right] \\
&= \operatorname{tr}\left(A \cdot \operatorname{E}\left[(z - \mu)(z - \mu)^T\right]\right) \\
&= \operatorname{tr}(A V).
\end{align}


Expectation of the product of two different quadratic forms
One can take the expectation of the product of two different quadratic forms in a zero-mean Gaussian random vector \mathbf{X} as follows:

\operatorname{E}\left[(\mathbf{X}^{T}A\mathbf{X})(\mathbf{X}^{T}B\mathbf{X})\right] = 2\operatorname{tr}(A K_{\mathbf{X}\mathbf{X}} B K_{\mathbf{X}\mathbf{X}}) + \operatorname{tr}(A K_{\mathbf{X}\mathbf{X}})\operatorname{tr}(B K_{\mathbf{X}\mathbf{X}})

where again K_{\mathbf{X}\mathbf{X}} is the covariance matrix of \mathbf{X}. Again, since both quadratic forms are scalars and hence their product is a scalar, the expectation of their product is also a scalar.
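A Monte Carlo check of this identity for a small Gaussian example (a sketch assuming Python with NumPy; K, A and B below are arbitrary symmetric illustrative matrices):

```python
import numpy as np

rng = np.random.default_rng(5)

K = np.array([[1.0, 0.4], [0.4, 2.0]])
A = np.array([[1.0, 0.0], [0.0, 3.0]])
B = np.array([[2.0, 1.0], [1.0, 1.0]])

X = rng.multivariate_normal(np.zeros(2), K, size=1_000_000)   # zero-mean Gaussian draws
qA = np.einsum('ij,jk,ik->i', X, A, X)                         # X^T A X per draw
qB = np.einsum('ij,jk,ik->i', X, B, X)                         # X^T B X per draw

mc_estimate = np.mean(qA * qB)
closed_form = 2 * np.trace(A @ K @ B @ K) + np.trace(A @ K) * np.trace(B @ K)
print(mc_estimate, closed_form)    # should roughly agree (Monte Carlo error remains)
```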


Applications

Portfolio theory
In portfolio theory in finance, an objective often is to choose a portfolio of risky assets such that the distribution of the random portfolio return has desirable properties. For example, one might want to choose the portfolio return having the lowest variance for a given expected value. Here the random vector is the vector \mathbf{r} of random returns on the individual assets, and the portfolio return p (a random scalar) is the inner product of the vector of random returns with a vector w of portfolio weights — the fractions of the portfolio placed in the respective assets. Since p = w^T\mathbf{r}, the expected value of the portfolio return is w^T\operatorname{E}(\mathbf{r}) and the variance of the portfolio return can be shown to be w^T C w, where C is the covariance matrix of \mathbf{r}.
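For a concrete illustration (a sketch assuming Python with NumPy; the expected returns, covariance matrix and weights below are made up), the portfolio mean and variance follow directly from w, \operatorname{E}(\mathbf{r}) and C:

```python
import numpy as np

# Hypothetical inputs for three assets: expected returns, covariance matrix C of r,
# and portfolio weights w (fractions of the portfolio, summing to 1).
expected_r = np.array([0.05, 0.07, 0.02])
C = np.array([[0.010, 0.002, 0.001],
              [0.002, 0.030, 0.004],
              [0.001, 0.004, 0.005]])
w = np.array([0.5, 0.3, 0.2])

portfolio_mean = w @ expected_r     # w^T E(r)
portfolio_variance = w @ C @ w      # w^T C w
print(portfolio_mean, portfolio_variance)
```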


Regression theory
In linear regression theory, we have data on n observations on a dependent variable y and n observations on each of k independent variables x_j. The observations on the dependent variable are stacked into a column vector y; the observations on each independent variable are also stacked into column vectors, and these latter column vectors are combined into a design matrix X (not denoting a random vector in this context) of observations on the independent variables. Then the following regression equation is postulated as a description of the process that generated the data:

y = X \beta + e,

where β is a postulated fixed but unknown vector of k response coefficients, and e is an unknown random vector reflecting random influences on the dependent variable. By some chosen technique such as ordinary least squares, a vector \hat \beta is chosen as an estimate of β, and the estimate of the vector e, denoted \hat e, is computed as

\hat e = y - X \hat \beta.

Then the statistician must analyze the properties of \hat \beta and \hat e, which are viewed as random vectors since a randomly different selection of n cases to observe would have resulted in different values for them.
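A minimal simulated example of this setup (a sketch assuming Python with NumPy; the sample size, regressors and true coefficient vector are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated data: n = 200 observations on k = 3 regressors plus a noise vector e.
n, k = 200, 3
X = rng.normal(size=(n, k))
beta_true = np.array([1.0, -0.5, 2.0])
e = 0.3 * rng.normal(size=n)
y = X @ beta_true + e

# Ordinary least squares estimate of beta, followed by the residual vector e_hat.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
e_hat = y - X @ beta_hat
print(beta_hat)    # a different random sample would give different estimates
```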


Vector time series
The evolution of a k×1 random vector \mathbf{X} through time can be modelled as a vector autoregression (VAR) as follows:

\mathbf{X}_t = c + A_1 \mathbf{X}_{t-1} + A_2 \mathbf{X}_{t-2} + \cdots + A_p \mathbf{X}_{t-p} + \mathbf{e}_t, \,

where the i-periods-back vector observation \mathbf{X}_{t-i} is called the i-th lag of \mathbf{X}, c is a k × 1 vector of constants (intercepts), A_i is a time-invariant k × k matrix and \mathbf{e}_t is a k × 1 random vector of error terms.
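A short simulation of such a process (a sketch assuming Python with NumPy; the VAR(1) coefficients below are arbitrary but chosen so the process is stable):

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative VAR(1) with k = 2: X_t = c + A_1 X_{t-1} + e_t.
c = np.array([0.1, -0.2])
A1 = np.array([[0.5, 0.1],
               [0.0, 0.3]])
T = 1_000

X = np.zeros((T, 2))
for t in range(1, T):
    e_t = rng.multivariate_normal(np.zeros(2), 0.01 * np.eye(2))   # error term
    X[t] = c + A1 @ X[t - 1] + e_t

print(X[-3:])    # last few observations of the simulated vector time series
```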

